-
Physics-guided machine learning (PGML) has become a prevalent approach in studying scientific systems due to its ability to integrate scientific theories for enhancing machine learning (ML) models. However, most PGML approaches are tailored to isolated and relatively simple tasks, which limits their applicability to complex systems involving multiple interacting processes and numerous influencing features. In this paper, we propose a Physics-Guided Foundation Model (PGFM) that combines pre-trained ML models and physics-based models and leverages their complementary strengths to improve the modeling of multiple coupled processes. To effectively conduct pre-training, we construct a simulated environmental system that encompasses a wide range of influencing features and various simulated variables generated by physics-based models. The model is pre-trained in this system to adaptively select important feature interactions guided by multi-task objectives. We then fine-tune the model for each specific task using true observations, while maintaining consistency with established physical theories, such as the principles of mass and energy conservation. We demonstrate the effectiveness of this methodology in modeling water temperature and dissolved oxygen dynamics in real-world lakes. The proposed PGFM is also broadly applicable to a range of scientific fields where physics-based models are being used.
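As a rough illustration of the fine-tuning step this abstract describes, the sketch below combines a supervised loss on observed lake temperatures with a soft energy-conservation penalty. The model class, the penalty form, and names such as PGFMSketch, energy_conservation_penalty, and alpha are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed form) of physics-guided fine-tuning: data loss on
# observed temperatures plus a soft energy-balance penalty. Synthetic data only.
import torch
import torch.nn as nn

class PGFMSketch(nn.Module):
    """Toy surrogate standing in for the pre-trained foundation model."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted water temperature
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def energy_conservation_penalty(pred_temp, net_heat_flux, heat_capacity=1.0):
    """Penalize day-to-day temperature changes that disagree with the supplied
    net heat flux (a stand-in for the energy-balance constraint)."""
    dT = pred_temp[1:] - pred_temp[:-1]
    return torch.mean((heat_capacity * dT - net_heat_flux[:-1]) ** 2)

# Illustrative fine-tuning loop on synthetic data.
torch.manual_seed(0)
x = torch.randn(32, 8)      # 32 days, 8 driver features (synthetic)
y_obs = torch.randn(32)     # observed temperatures (synthetic)
flux = torch.randn(32)      # net heat flux from a physics-based model (synthetic)

model = PGFMSketch(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha = 0.1                 # weight of the physics penalty (assumed)

for _ in range(5):
    opt.zero_grad()
    pred = model(x)
    loss = nn.functional.mse_loss(pred, y_obs) + alpha * energy_conservation_penalty(pred, flux)
    loss.backward()
    opt.step()
```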
-
Graph Transformer (GT) has recently emerged as a new paradigm of graph learning algorithms, outperforming the previously popular Message Passing Neural Network (MPNN) on multiple benchmarks. Previous work shows that with proper position embedding, GT can approximate MPNN arbitrarily well, implying that GT is at least as powerful as MPNN. In this paper, we study the inverse connection and show that MPNN with virtual node (VN), a commonly used heuristic with little theoretical understanding, is powerful enough to arbitrarily approximate the self-attention layer of GT. In particular, we first show that if we consider one type of linear transformer, the so-called Performer/Linear Transformer, then MPNN+VN with only O(1) depth and O(1) width can approximate a self-attention layer in Performer/Linear Transformer. Next, via a connection between MPNN+VN and DeepSets, we prove that MPNN+VN with O(n^d) width and O(1) depth can approximate the self-attention layer arbitrarily well, where d is the input feature dimension. Lastly, under some assumptions, we provide an explicit construction of MPNN+VN with O(1) width and O(n) depth approximating the self-attention layer in GT arbitrarily well. On the empirical side, we demonstrate that 1) MPNN+VN is a surprisingly strong baseline, outperforming GT on the recently proposed Long Range Graph Benchmark (LRGB) dataset, 2) our MPNN+VN improves over early implementations on a wide range of OGB datasets, and 3) MPNN+VN outperforms Linear Transformer and MPNN on the climate modeling task.
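To make the MPNN+VN heuristic concrete, the sketch below shows one minimal layer in which a virtual node pools all node states and broadcasts a global message back, giving every pair of nodes a two-hop communication path; this is the mechanism that lets MPNN+VN mimic (linear) self-attention. The module name, layer sizes, and update functions are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch (assumed form) of an MPNN layer augmented with a virtual node.
import torch
import torch.nn as nn

class MPNNWithVirtualNode(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)      # edge message function
        self.upd = nn.Linear(2 * dim, dim)  # node update from (state, aggregated messages)
        self.vn_upd = nn.Linear(dim, dim)   # virtual-node update

    def forward(self, x: torch.Tensor, adj: torch.Tensor, vn: torch.Tensor):
        # x: (n, dim) node features, adj: (n, n) adjacency, vn: (dim,) virtual-node state
        # 1. Virtual node broadcasts its state to every node.
        x = x + vn.unsqueeze(0)
        # 2. Standard message passing over graph edges (sum aggregation).
        agg = adj @ self.msg(x)
        x = torch.relu(self.upd(torch.cat([x, agg], dim=-1)))
        # 3. Virtual node pools all node states -- the global readout step.
        vn = torch.relu(self.vn_upd(vn + x.sum(dim=0)))
        return x, vn

# Toy usage on a random graph.
n, dim = 5, 16
x = torch.randn(n, dim)
adj = (torch.rand(n, n) > 0.5).float()
vn = torch.zeros(dim)
layer = MPNNWithVirtualNode(dim)
x, vn = layer(x, adj, vn)
```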